This is where defragmentation comes to the rescue for the SSD.


Apacer introduced its first SSD bundled with optimization software to address, among other things, the problem of file system fragmentation. According to Apacer's SSD+Optimizer White Paper, by applying defragmentation algorithms tailored specifically for SSDs, the Optimizer software can restore performance, making reads 5.9x faster, writes 19.5x faster, random reads 3.9x faster, and random writes 9.0x faster. Notice how severely file system fragmentation degrades performance in this test case: in a badly fragmented file system, sequential read can degrade to the level of random write, while sequential write and random write become extremely sluggish. Here we see to what extent an SSD may suffer from file system fragmentation, and how defragmentation can bring its performance back.

Benchmark scores on an Apacer 8GB SATA SSD, measured with HDBench™ software
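
To see why fragmentation has this effect, consider a small simulation. This is not Apacer's software, and the extent lists are hypothetical: a file stored in one contiguous extent can be fetched with a single large read request, while the same data scattered across many small extents needs one request per fragment, so a "sequential" read starts to behave like random I/O.

    # Illustrative sketch: how a fragmented file turns a sequential read
    # into many scattered requests at the device level (extents are hypothetical).

    def read_requests(extents):
        """Each extent is (start_lba, length_in_blocks). Physically contiguous
        extents can be served by one request; every gap forces a new one."""
        requests = []
        for start, length in extents:
            if requests and requests[-1][0] + requests[-1][1] == start:
                # Extend the previous request: the data is physically contiguous.
                requests[-1] = (requests[-1][0], requests[-1][1] + length)
            else:
                requests.append((start, length))
        return requests

    contiguous_file = [(1000, 2048)]                             # one extent, 1 MiB at 512 B/block
    fragmented_file = [(1000 + 80 * i, 16) for i in range(128)]  # same 1 MiB split into 128 extents

    print(len(read_requests(contiguous_file)))   # 1 large request
    print(len(read_requests(fragmented_file)))   # 128 small requests -> random-like I/O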

Now comes a problem. Because a defragmentation routine moves file fragments while consolidating files and free space, it causes additional write operations to the SSD's NAND flash. NAND flash, however, tolerates only a limited number of erase/write cycles per memory unit, which is a lifetime issue: write too often and the flash wears out early. It appears, then, that by shuffling files around, defragmentation will increase erase counts and shorten the SSD's life span.
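
A back-of-the-envelope endurance estimate makes the stakes concrete. All figures below (erase/write cycles, daily host writes, write amplification) are assumptions chosen for illustration, not Apacer specifications.

    # Rough endurance estimate with hypothetical numbers:
    # lifetime_days ~= (P/E cycles * capacity) / (daily host writes * write amplification)

    capacity_gb        = 8        # e.g. the 8 GB drive discussed above
    pe_cycles          = 10_000   # assumed erase/write cycles per memory unit
    host_writes_gb_day = 2        # assumed average host writes per day
    write_amp          = 3.0      # extra NAND writes per host write (fragmentation raises this)

    total_writable_gb = capacity_gb * pe_cycles            # ideal wear-leveled endurance
    lifetime_days     = total_writable_gb / (host_writes_gb_day * write_amp)
    print(f"~{lifetime_days / 365:.1f} years")              # fewer years as write_amp grows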

Indeed, the limited number of erase/write cycles is the SSD's weak spot, and minimizing erase counts has long been a key research area for flash vendors. At the level of the memory controller and the L2P (logical-to-physical) mapping table, the question is how efficiently the controller distributes data among memory cells. Yet no matter how well the controller reduces its "write amplification" factor with whatever wear-leveling algorithm, in its present architecture it can do nothing about the extra erase/write cycles caused by the I/O multiplication of a fragmented file system.
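
The two layers can be separated with a rough model; the overhead factors below are assumptions for illustration only. The controller's write amplification applies on top of whatever host-side I/O multiplication the file system produces, so no wear-leveling scheme can claw back the latter.

    # Sketch of the two layers (all figures hypothetical): the controller's
    # wear leveling governs its own write amplification, not extra host I/O
    # caused by a fragmented file system.

    def nand_writes_gb(host_payload_gb, fs_overhead, controller_waf):
        """Total data physically written to NAND for a given host payload."""
        host_writes = host_payload_gb * fs_overhead   # extra writes from fragmented allocations
        return host_writes * controller_waf           # then amplified inside the SSD

    clean_fs      = nand_writes_gb(10, fs_overhead=1.0, controller_waf=1.5)
    fragmented_fs = nand_writes_gb(10, fs_overhead=1.8, controller_waf=1.5)
    print(clean_fs, fragmented_fs)   # 15.0 vs 27.0 GB of NAND writes for the same payload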

Thus we are in a dilemma. On the one hand, we need to defragment our SSD to improve its performance. On the other hand, we cannot defragment too aggressively, lest the extra writes shorten the SSD's longevity. Traditional defragmentation aims to clear every fragment found on a target disk, but this strategy cannot work for SSDs because of the lifetime issue. The solution to this dilemma is one of balance and compromise: how do we devise a defrag strategy that improves throughput without incurring too many erase counts?

For Apacer, a better defrag strategy is to prevent free space fragmentation from occurring without being too aggressive about consolidating files. Since free space fragmentation leads to file fragmentation, minimizing free space fragments also minimizes the chances of creating fragmented files. If such a strategy is applied from the SSD's initial stage of use, the file system can be maintained in a less fragmented state, turning the vicious circle of fragmentation into a virtuous one. At the same time, by consolidating files only when needed, the erase counts spent on consolidation are kept to a minimum.
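
A minimal sketch of a threshold-driven policy in this spirit is given below. It is not Apacer's actual algorithm; the thresholds, function name, and inputs are hypothetical.

    # Sketch of a "prevent free space fragmentation, consolidate files only
    # when needed" policy; all thresholds are hypothetical.

    FREE_FRAGMENT_LIMIT = 64   # act before free space splinters further
    FILE_FRAGMENT_LIMIT = 16   # touch a file only when it is badly fragmented

    def plan_defrag(free_space_fragments, file_fragment_counts):
        """Return a list of actions; doing less work means fewer erase cycles."""
        actions = []
        if free_space_fragments > FREE_FRAGMENT_LIMIT:
            actions.append("consolidate_free_space")         # keeps new writes contiguous
        for name, fragments in file_fragment_counts.items():
            if fragments > FILE_FRAGMENT_LIMIT:
                actions.append(f"consolidate_file:{name}")   # only when really needed
        return actions

    print(plan_defrag(200, {"pagefile.sys": 3, "video.avi": 120}))
    # ['consolidate_free_space', 'consolidate_file:video.avi']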

Shown below is a test case run by Apacer, documenting the cumulative erase counts of an SSD test sample as it goes through a series of procedures. Fragmented free space is artificially created at stages 2 and 5, and the optimization algorithm is applied at stage 6. HDBENCH is first run on fragmented free space at stage 3, and then on the optimized, defragmented space at stage 7. Defragmentation itself incurs an increase of 4 erase counts. Running HDBENCH incurs 4 erase counts on fragmented space but only 1 on defragmented space; that is, the optimization algorithm reduces the HDBENCH erase count from 4 to 1.

Cumulative erase activity on an Apacer 8GB PATA SSD, measured with SSDlife software

Based on this test, we can argue that although optimization spends some erase counts consolidating fragments, it reduces the erase counts incurred by subsequent write activity, because the fragments have already been cleared. In this test, the total erase count with optimization is only a fraction higher than the total without it; it is as if optimization borrows a few erase counts from later write activity and spends them in advance. Since erase counts accumulate differently in different user scenarios, it is conceivable that the total erase count with optimization can be the same as, or even lower than, the total without it. In other words, a well-designed defrag algorithm can extend an SSD's life span.
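
The break-even can be made explicit with the erase counts from this test, extrapolated over a hypothetical number of repeated HDBENCH-like runs. The extrapolation is illustrative and is not part of Apacer's measurement.

    # Break-even arithmetic using the erase counts from the test above,
    # extrapolated over a hypothetical number of repeated runs.

    one_time_optimization_cost = 4   # erase counts spent by the defrag pass
    per_run_fragmented         = 4   # erase counts per run on fragmented space
    per_run_optimized          = 1   # erase counts per run on defragmented space

    for runs in (1, 2, 5, 10):
        without_opt = per_run_fragmented * runs
        with_opt    = one_time_optimization_cost + per_run_optimized * runs
        print(runs, without_opt, with_opt)
    # 1 run:  4 vs 5  (optimization costs slightly more, as in the test)
    # 2 runs: 8 vs 6  (already ahead; the gap widens with further writes)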

To conclude: an SSD is fast, but only in theory. In real use, an SSD suffers from file system fragmentation just as an HDD does, and its performance can degrade severely. Although defragmentation can bring that performance back, care must be taken not to incur too many write cycles while clearing file and free space fragments.
